38 research outputs found

    Higher-Order Improvements of the Sieve Bootstrap for Fractionally Integrated Processes

    This paper investigates the accuracy of bootstrap-based inference in the case of long memory fractionally integrated processes. The re-sampling method is based on the semi-parametric sieve approach, whereby the dynamics in the process used to produce the bootstrap draws are captured by an autoregressive approximation. Application of the sieve method to data pre-filtered by a semi-parametric estimate of the long memory parameter is also explored. Higher-order improvements yielded by both forms of re-sampling are demonstrated using Edgeworth expansions for a broad class of statistics that includes first- and second-order moments, the discrete Fourier transform and regression coefficients. The methods are then applied to the problem of estimating the sampling distributions of the sample mean and of selected sample autocorrelation coefficients, in experimental settings. In the case of the sample mean, the pre-filtered version of the bootstrap is shown to avoid the distinct underestimation of the sampling variance of the mean which the raw sieve method demonstrates in finite samples, the higher-order accuracy of the latter notwithstanding. Pre-filtering also produces gains in terms of the accuracy with which the sampling distributions of the sample autocorrelations are reproduced, most notably in the part of the parameter space in which asymptotic normality does not obtain. Most importantly, the sieve bootstrap is shown to reproduce the (empirically infeasible) Edgeworth expansion of the sampling distribution of the autocorrelation coefficients, in the part of the parameter space in which the expansion is valid.
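The core of the sieve approach described above can be sketched in a few lines: fit a long autoregression to the data, then rebuild bootstrap series from resampled residuals. The minimal sketch below uses a least-squares AR(p) fit with a fixed, user-chosen order p; the paper's semi-parametric treatment of the order and of long memory is more delicate, so this is illustrative only.

```python
import numpy as np

def sieve_bootstrap(x, p, n_boot, seed=0):
    """Generate sieve-bootstrap replicates of a series x.

    Fits an AR(p) approximation by least squares, then rebuilds
    bootstrap series by resampling the centred residuals. The fixed
    order p and the least-squares fit are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Lag matrix: column k holds lag k+1 of the series.
    X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    A = np.column_stack([np.ones(n - p), X])
    y = x[p:]
    phi, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ phi
    resid = resid - resid.mean()          # centre residuals before resampling
    draws = np.empty((n_boot, n))
    for b in range(n_boot):
        e = rng.choice(resid, size=n, replace=True)
        z = list(x[:p])                   # warm-start with observed values
        for t in range(p, n):
            z.append(phi[0] + sum(phi[k + 1] * z[t - k - 1] for k in range(p)) + e[t])
        draws[b] = z
    return draws
```

Each row of the returned array is one bootstrap replicate; any statistic of interest (the sample mean, sample autocorrelations) can then be recomputed across rows to approximate its sampling distribution.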

    Bias Reduction of Long Memory Parameter Estimators via the Pre-filtered Sieve Bootstrap

    This paper investigates the use of bootstrap-based bias correction of semi-parametric estimators of the long memory parameter in fractionally integrated processes. The re-sampling method involves the application of the sieve bootstrap to data pre-filtered by a preliminary semi-parametric estimate of the long memory parameter. Theoretical justification for using the bootstrap techniques to bias-adjust log-periodogram and semi-parametric local Whittle estimators of the memory parameter is provided. Simulation evidence comparing the performance of the bootstrap bias correction with analytical bias correction techniques is also presented. The bootstrap method is shown to produce notable bias reductions, in particular when applied to an estimator for which analytical adjustments have already been used. The empirical coverage of confidence intervals based on the bias-adjusted estimators is very close to the nominal level, for a reasonably large sample size, more so than for the comparable analytically adjusted estimators. The precision of inferences (as measured by interval length) is also greater when the bootstrap, rather than an analytical adjustment, is used to bias-correct.
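Two ingredients of the procedure above are standard enough to sketch: the log-periodogram (GPH) estimate of the memory parameter d, and the additive bootstrap bias adjustment d_bc = 2·d̂ − mean(d*). The bandwidth m and the plain OLS regression below are illustrative assumptions; how the bootstrap replicates d* are generated (the pre-filtered sieve) is the paper's contribution and is not reproduced here.

```python
import numpy as np

def gph_estimate(x, m):
    """Log-periodogram (GPH) estimate of the memory parameter d.

    Regresses the log periodogram at the first m Fourier frequencies
    on -2*log(2*sin(lambda/2)); the OLS slope estimates d.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    per = (np.abs(dft) ** 2) / (2 * np.pi * n)   # periodogram ordinates
    reg = -2 * np.log(2 * np.sin(lam / 2))
    reg_c = reg - reg.mean()
    logp = np.log(per)
    return reg_c @ (logp - logp.mean()) / (reg_c @ reg_c)

def bootstrap_bias_correct(d_hat, d_boot):
    """Additive bootstrap bias adjustment: d_bc = d_hat - (mean(d*) - d_hat)."""
    return 2.0 * d_hat - np.mean(d_boot)
```

Given bootstrap replicates of the estimator, the adjustment simply subtracts the estimated bias mean(d*) − d̂ from the original estimate.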

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field

    Business Forecasting with Exponential Smoothing: Computation of Prediction Intervals

    The problem considered in this paper is how to find reliable prediction intervals with simple exponential smoothing and trend corrected exponential smoothing. Methods for constructing prediction intervals based on linear approximation and bootstrapping are proposed. A Monte Carlo simulation study, in which the proposed methods are compared, indicates that the most reliable intervals can be obtained with a parametric form of the bootstrap method. An application of the method to predicting Malaysian GNP per capita is considered
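The parametric bootstrap the abstract favours can be sketched for simple exponential smoothing: simulate many future sample paths from the smoothing recursion with Gaussian errors whose variance comes from the in-sample one-step residuals, then take quantiles. The fixed smoothing constant alpha and the single-source-of-error state update below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ses_bootstrap_interval(x, alpha, h, level=0.95, n_sim=2000, seed=0):
    """Parametric-bootstrap prediction intervals for simple exponential
    smoothing, h steps ahead.

    alpha is taken as given (the paper also treats its estimation).
    Future paths are simulated from the SES state equation
    l_t = l_{t-1} + alpha * e_t, with Gaussian e_t whose standard
    deviation is estimated from the one-step in-sample residuals.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    lev, resid = x[0], []
    for t in range(1, len(x)):
        resid.append(x[t] - lev)              # one-step forecast error
        lev = lev + alpha * (x[t] - lev)
    sigma = np.std(resid, ddof=1)
    paths = np.empty((n_sim, h))
    for s in range(n_sim):
        l = lev
        for j in range(h):
            e = rng.normal(0.0, sigma)
            paths[s, j] = l + e               # simulated future observation
            l = l + alpha * e                 # state update
    lo = np.quantile(paths, (1 - level) / 2, axis=0)
    hi = np.quantile(paths, 1 - (1 - level) / 2, axis=0)
    return lev, lo, hi
```

The point forecast is flat at the final level; the simulated interval widens with the horizon because the level itself accumulates error.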

    The finite-sample properties of autoregressive approximations of fractionally-integrated and non-invertible processes

    This paper investigates the empirical properties of autoregressive approximations to two classes of process for which the usual regularity conditions do not apply; namely the non-invertible and fractionally integrated processes considered in Poskitt (2006). In that paper the theoretical consequences of fitting long autoregressions under regularity conditions that allow for these two situations were considered, and convergence rates for the sample autocovariances and autoregressive coefficients established. We now consider the finite-sample properties of alternative estimators of the AR parameters of the approximating AR(h) process and corresponding estimates of the optimal approximating order h. The estimators considered include the Yule-Walker, Least Squares, and Burg estimators
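Of the three estimators compared, Yule-Walker is the easiest to sketch: solve the Toeplitz system built from the sample autocovariances. A minimal version is below (Least Squares regresses the series on its own lags, and Burg minimizes forward and backward prediction errors; neither is shown here).

```python
import numpy as np

def yule_walker(x, h):
    """Yule-Walker estimate of the AR(h) coefficients.

    Solves R * phi = gamma[1:h+1], where R is the Toeplitz matrix of
    sample autocovariances gamma[0..h-1] (biased 1/n normalization).
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    gamma = np.array([x[:n - k] @ x[k:] / n for k in range(h + 1)])
    R = np.array([[gamma[abs(i - j)] for j in range(h)] for i in range(h)])
    return np.linalg.solve(R, gamma[1:])
```

The 1/n (rather than 1/(n−k)) normalization guarantees a positive-definite system and a stationary fitted model, which is one reason Yule-Walker behaves differently from Least Squares and Burg in the non-standard settings the paper studies.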

    The Locally Unbiased Two-Sided Durbin-Watson Test

    An algorithm for constructing locally unbiased two-sided critical regions for the Durbin-Watson test is presented. It can also be applied to other two-sided tests. Empirical calculations suggest that, at least for the Durbin-Watson test, the current practice of using equal-tailed critical values yields approximately locally unbiased critical regions
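For reference, the Durbin-Watson statistic itself is the sum of squared first differences of the regression residuals over their sum of squares; values near 2 indicate little first-order autocorrelation, values near 0 positive autocorrelation, and values near 4 negative autocorrelation. (The paper's contribution is the choice of critical regions, not the statistic.)

```python
def durbin_watson(resid):
    """Durbin-Watson statistic for a sequence of regression residuals.

    DW = sum_{t=2..n} (e_t - e_{t-1})^2 / sum_{t=1..n} e_t^2,
    which ranges from 0 to 4 with 2 indicating no first-order
    autocorrelation.
    """
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    den = sum(e * e for e in resid)
    return num / den
```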

    The Use of Information Criteria for Model Selection Between Models with Equal Numbers of Parameters

    Information criteria (IC) are used widely to choose between competing alternative models. When these models have the same number of parameters, the choice simplifies to the model with the largest maximized log-likelihood. By studying the problem of selecting either first-order autoregressive or first-order moving average disturbances in the linear regression model, we present clear evidence that a particular model can be unfairly favoured because of the shape or functional form of its log-likelihood. We also find that the presence of nuisance parameters can adversely affect the probabilities of correct selection. The use of Monte Carlo methods to find more appropriate penalties and the application of IC procedures to marginal likelihoods rather than conventional likelihoods are found to result in improved selection probabilities in small samples
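The simplification the abstract describes is easy to see in code: with equal parameter counts the IC penalty terms cancel, so AIC, BIC and the rest all reduce to picking the model with the larger maximized log-likelihood. The sketch below compares AR(1) and MA(1) models for a series using conditional Gaussian log-likelihoods maximized over a coefficient grid; the conditional likelihoods and grid search are illustrative simplifications of the paper's setting, which concerns disturbances in a regression model.

```python
import math

def ar1_loglik(x, phi):
    # Conditional Gaussian log-likelihood with sigma^2 concentrated out.
    e = [x[t] - phi * x[t - 1] for t in range(1, len(x))]
    s2 = sum(v * v for v in e) / len(e)
    return -0.5 * len(e) * (math.log(2 * math.pi * s2) + 1.0)

def ma1_loglik(x, theta):
    # Conditional log-likelihood with the pre-sample innovation set to 0.
    e, prev = [], 0.0
    for t in range(1, len(x)):
        cur = x[t] - theta * prev
        e.append(cur)
        prev = cur
    s2 = sum(v * v for v in e) / len(e)
    return -0.5 * len(e) * (math.log(2 * math.pi * s2) + 1.0)

def select_by_ic(x, grid):
    # Both models have one coefficient plus a variance, so any IC
    # penalty is identical and the comparison is pure log-likelihood.
    l_ar = max(ar1_loglik(x, p) for p in grid)
    l_ma = max(ma1_loglik(x, q) for q in grid)
    return "AR(1)" if l_ar >= l_ma else "MA(1)"
```

Because the decision rests entirely on the two maximized log-likelihoods, any systematic difference in the curvature or shape of the two likelihood surfaces translates directly into unequal selection probabilities, which is the distortion the paper documents.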